The use of machine learning (ML) in structural health monitoring (SHM) is becoming increasingly common, as many of its inherent tasks (such as regression and classification) fall naturally within the remit of developing condition-based assessment. This chapter introduces the concept of physics-informed machine learning, where one adapts ML algorithms to account for the physical insight an engineer will often have of the structure they are attempting to model or assess. The chapter will demonstrate how grey-box models, which combine physics-based models with data-driven ones, can improve predictive capability in an SHM setting. A particular strength of the approaches demonstrated here is their ability to generalise, with enhanced predictive capability in different regimes. This is a key issue when life assessment is required, or where monitoring data do not cover the operational conditions the structure will experience. The chapter will provide an overview of physics-informed ML and introduce a number of approaches for grey-box modelling in a Bayesian setting. The main ML tool discussed will be Gaussian process regression, and we will demonstrate how physical assumptions/models can be incorporated through constraints, through the mean function and kernel design, and finally in a state-space setting. A range of SHM applications will be demonstrated, from loads monitoring tasks for offshore and aerospace structures to the performance monitoring of long-span bridges.
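The grey-box idea of combining a physics-based model with a data-driven one can be sketched in a few lines: use the known (linear) physics as the Gaussian process mean function and let the GP absorb the residual. This is our own toy illustration, not the chapter's code; the kernel, parameter values, and the cubic "true" system are all assumptions.

```python
# Grey-box GP regression sketch: physics-based mean function + GP on the residual.
import numpy as np

def rbf_kernel(x1, x2, lengthscale=0.5, variance=1.0):
    """Squared-exponential kernel."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_predict(x_train, y_train, x_test, mean_fn, noise=1e-2):
    """GP posterior mean at x_test, with mean_fn encoding the physics."""
    r = y_train - mean_fn(x_train)          # residual after the physics model
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train)
    return mean_fn(x_test) + Ks @ np.linalg.solve(K, r)

# True system: cubic hardening spring; the physics model knows only the linear part.
k_lin, k_cub = 4.0, 1.5
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 30)
y = k_lin * x + k_cub * x**3 + 0.01 * rng.standard_normal(30)

physics_mean = lambda x: k_lin * x          # grey-box prior mean (linear spring)
x_test = np.linspace(-1, 1, 11)
y_hat = gp_predict(x, y, x_test, physics_mean)
```

Because the GP only has to learn the (small, smooth) cubic residual rather than the full force-displacement law, predictions revert to the physics model away from the data, which is the generalisation benefit the chapter emphasises.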
The identification of nonlinear dynamic systems remains a significant challenge across engineering. This work proposes an approach based on Bayesian filtering for extracting and identifying the contribution of an unknown nonlinear term in a system, which can be viewed as an alternative perspective on restoring-force-surface-type methods. To achieve this identification, the contribution of the nonlinear restoring force is initially modelled as a Gaussian process. This Gaussian process is converted into a state-space model and combined with the linear dynamic component of the system. Then, by inferring the filtering and smoothing distributions, the internal states of the system and the nonlinear restoring force can be extracted. From these states, a nonlinear model can be constructed. The approach is shown to be effective in both a simulated case study and on an experimental benchmark dataset.
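The core idea can be sketched numerically: treat the unknown restoring force as an extra, slowly varying state alongside the linear dynamics and recover it by Kalman filtering. This is our own illustration with toy parameters; a random-walk force state is used here as a crude stand-in for the paper's converted Gaussian-process prior.

```python
# Joint state/force estimation sketch: Kalman filter with an augmented force state.
import numpy as np

m, c, k = 1.0, 0.1, 4.0                      # known linear SDOF parameters
dt, N = 0.01, 2000
rng = np.random.default_rng(1)

# Simulate the true system m*x'' + c*x' + k*x + f_nl(x) = u(t) with f_nl = 5*x^3.
x = v = 0.0
f_true, meas = [], []
for i in range(N):
    u = 2.0 * np.sin(1.2 * i * dt)
    f = 5.0 * x ** 3
    f_true.append(f)
    x, v = x + dt * v, v + dt * (u - c * v - k * x - f) / m
    meas.append([x + 1e-3 * rng.standard_normal(),
                 v + 1e-3 * rng.standard_normal()])

# Kalman filter on the augmented state z = [x, v, f]; f evolves as a random walk.
A = np.array([[1.0, dt, 0.0],
              [-dt * k / m, 1.0 - dt * c / m, -dt / m],
              [0.0, 0.0, 1.0]])
B = np.array([0.0, dt / m, 0.0])
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
Q = np.diag([1e-10, 1e-10, 2.5e-3])          # process noise drives the force state
R = np.diag([1e-6, 1e-6])                    # x and v measured with small noise

z, P = np.zeros(3), np.eye(3)
f_est = []
for i in range(N):
    u = 2.0 * np.sin(1.2 * i * dt)
    z, P = A @ z + B * u, A @ P @ A.T + Q    # predict
    innov = np.array(meas[i]) - H @ z        # update with the measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    z = z + K @ innov
    P = (np.eye(3) - K @ H) @ P
    f_est.append(z[2])

corr = np.corrcoef(f_est[N // 2 :], f_true[N // 2 :])[0, 1]
```

The recovered force state tracks the true cubic restoring force; plotting `f_est` against the filtered displacement would then reveal the nonlinearity, in the spirit of a restoring force surface.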
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practice, or about the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development. 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
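The K-fold cross-validation scheme that only 37% of participants reported using can be sketched with the standard library alone; the fold layout below is a generic contiguous split, not any particular challenge pipeline.

```python
# Minimal K-fold index generator: contiguous, near-equal folds over n samples.
def k_fold_indices(n, k):
    """Yield (train_idx, val_idx) pairs for k folds; every sample is validated once."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, val
        start += size

# Example: 10 samples, 3 folds -> fold sizes 4, 3, 3.
folds = list(k_fold_indices(10, 3))
```

Each fold would then be used to train one model and score it on the held-out indices, with the k scores averaged for the final estimate.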
Science tests competing theories or models by evaluating the similarity of their predictions against observational experience. Thus, how we measure similarity fundamentally determines what we learn. In machine learning and scientific modeling, similarity metrics are used as objective functions. A classic example is mean squared error, which is the optimal measure of similarity when errors are normally distributed and independent and identically distributed (iid). In many cases, however, the error distribution is neither normal nor iid, so it is left to the scientist to determine an appropriate objective. Here, we review how information theory can guide that selection, then demonstrate the approach with a simple hydrologic model.
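The claim about mean squared error can be made concrete: under iid Gaussian errors with fixed variance, the negative log-likelihood is an affine function of the MSE, so both criteria rank candidate models identically. The small check below is our own illustration, not the paper's hydrologic example.

```python
# MSE vs. iid-Gaussian negative log-likelihood: same ranking of candidate models.
import math

def mse(pred, obs):
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

def gaussian_nll(pred, obs, sigma=1.0):
    """Negative log-likelihood of obs under pred with iid N(0, sigma^2) errors."""
    n = len(obs)
    const = n * math.log(sigma * math.sqrt(2 * math.pi))
    return const + sum((p - o) ** 2 for p, o in zip(pred, obs)) / (2 * sigma ** 2)

obs = [1.0, 2.0, 3.0, 4.0]
model_a = [1.1, 2.0, 2.9, 4.2]   # closer fit
model_b = [0.5, 2.5, 3.5, 3.0]   # worse fit
```

When the error distribution is not Gaussian or not iid, this equivalence breaks down, which is exactly where the information-theoretic selection of an objective becomes relevant.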
The NASA Astrophysics Data System (ADS) is an essential tool for researchers that allows them to explore the astronomy and astrophysics scientific literature, but it has yet to exploit recent advances in natural language processing. At ADASS 2021, we introduced astroBERT, a machine learning language model tailored to the text used in astronomy papers in ADS. In this work we: announce the first public release of the astroBERT language model; show how astroBERT improves over existing public language models on astrophysics-specific tasks; and detail how ADS plans to harness the unique structure of scientific papers, the citation graph, and citation context to further improve astroBERT.
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
Frequency-modulated continuous-wave (FMCW) lidar is a recently emerging technology that enables, via the Doppler effect, an instantaneous relative radial-velocity measurement for each return. In this letter, we present the first continuous-time lidar-only odometry algorithm using these Doppler velocity measurements from an FMCW lidar to aid odometry in geometrically degenerate environments. We apply an existing continuous-time framework that efficiently estimates the vehicle trajectory using Gaussian process regression to compensate for motion distortion due to the scanning nature of any mechanically actuated lidar (FMCW and non-FMCW). We evaluate our proposed algorithm on several real-world datasets, including a publicly available one and datasets we collected ourselves. Our algorithm outperforms the only existing method that also uses Doppler velocity measurements, and we study difficult conditions in which including this extra information greatly improves performance. We also demonstrate state-of-the-art performance of lidar-only odometry, both with and without using Doppler velocity measurements, in nominal conditions. Code for this project can be found at: https://github.com/utiasasrl/steam_icp.
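The Doppler measurement the letter relies on is geometrically simple: for a static landmark, each return's radial velocity is the projection of the sensor's velocity onto the unit beam direction. The sketch below is our own illustration of that model; the sign convention (positive along the beam) and the example vectors are assumptions, and real sensors may report the opposite sign.

```python
# Doppler radial-velocity measurement model: v_r = dot(v, r_hat).
import math

def radial_velocity(sensor_vel, beam_dir):
    """Radial speed of the sensor along the (not necessarily unit) beam direction."""
    norm = math.sqrt(sum(b * b for b in beam_dir))
    r_hat = [b / norm for b in beam_dir]
    return sum(v * r for v, r in zip(sensor_vel, r_hat))

v = [5.0, 0.0, 0.0]                                 # vehicle moving 5 m/s along x
vr_forward = radial_velocity(v, [1.0, 0.0, 0.0])    # beam straight ahead: 5 m/s
vr_side = radial_velocity(v, [0.0, 1.0, 0.0])       # beam to the side: 0 m/s
```

Because beams pointing in different directions constrain different components of the velocity, these per-return measurements can disambiguate motion even when the scene geometry (e.g., a long featureless corridor) leaves point-cloud registration degenerate.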
Recent years have brought substantial advances in electric vehicles (EVs) and the associated infrastructure/communications. Intrusion detection systems (IDS) are widely deployed for anomaly detection in such critical infrastructure. This paper presents an interpretable anomaly detection system (RX-ADS) for intrusion detection on the CAN protocol in EVs. The contributions include: 1) a window-based feature extraction method; 2) a deep-autoencoder-based anomaly detection method; and 3) an adversarial-machine-learning-based explanation generation method. The proposed approach was tested on two benchmark CAN datasets: OTIDS and Car Hacking. The anomaly detection performance of RX-ADS was compared against state-of-the-art approaches on these datasets: HIDS and GIDS. The RX-ADS approach achieved performance comparable to the HIDS approach (OTIDS dataset) and outperformed both the HIDS and GIDS approaches (Car Hacking dataset). Furthermore, the proposed approach was able to generate explanations for anomalous behaviour caused by various intrusions. These explanations were later validated against the information used by domain experts to detect anomalies. Other advantages of RX-ADS include: 1) the method can be trained on unlabelled data; and 2) the explanations help experts understand anomalies and perform root cause analysis, and also aid in AI model debugging and diagnostics, ultimately improving user trust in AI systems.
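A window-based feature extraction step of the kind listed as the first contribution can be sketched as follows. The paper's exact features are not specified here; the window size, the frame format, and the three statistics below are illustrative assumptions.

```python
# Sliding-window feature extraction over timestamped CAN frames.
from collections import Counter

def window_features(frames, window=4):
    """frames: list of (timestamp, can_id); returns one feature dict per window."""
    feats = []
    for i in range(0, len(frames) - window + 1, window):
        chunk = frames[i : i + window]
        times = [t for t, _ in chunk]
        ids = Counter(cid for _, cid in chunk)
        feats.append({
            "mean_gap": (times[-1] - times[0]) / (window - 1),  # mean inter-frame gap
            "unique_ids": len(ids),                             # ID diversity
            "top_id_share": ids.most_common(1)[0][1] / window,  # dominance of one ID
        })
    return feats

frames = [(0.00, "0x1A0"), (0.01, "0x1A0"), (0.02, "0x2B0"), (0.03, "0x1A0"),
          (0.04, "0x1A0"), (0.05, "0x1A0"), (0.06, "0x1A0"), (0.07, "0x1A0")]
feats = window_features(frames)
```

Feature vectors of this form would then be fed to the autoencoder, whose reconstruction error on a window flags it as anomalous; flooding attacks, for instance, shrink the inter-frame gap and inflate one ID's share.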
Artificial intelligence (AI) assisting antimicrobial prescribing raises significant moral questions. Utilising ethical frameworks alongside AI-driven systems, while considering their specific complexities, can support moral decision-making in tackling antimicrobial resistance.
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterised. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, mathematics, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behaviour of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers, spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorisation component, whereas tasks that exhibit "breakthrough" behaviour at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.